    Security from Location

    Receiver Performance for an Enhanced DGPS Data Channel

    The Coast Guard currently operates a maritime differential GPS service consisting of two control centers and over 85 remote broadcast sites. This service broadcasts GPS correction information on marine radiobeacon frequencies to improve the accuracy and integrity of GPS. The existing system provides differential corrections over a medium frequency carrier using minimum shift keying (MSK) as the modulation method. MSK is a spectrally compact (narrowband) form of the Continuous Phase Frequency Shift Keying (CPFSK) modulation technique. In a binary signaling channel, the two instantaneous frequencies for this modulation method are chosen so as to produce orthogonal signaling with the minimum modulation index. Current DGPS corrections are transmitted at a relatively low data rate, with message structures designed in an era when Selective Availability was in full operation. Greater demands for accuracy, coupled with current operations in a post-SA environment, have prompted a reexamination of the DGPS data and signal structure with an eye towards improving information rate while minimizing legacy user impact. A two-phased plan for a new generation of DGPS capability can be envisioned. In the first phase (near-term), new ionospheric messages would be introduced to allow greater DGPS accuracy at larger distances from the beacons. This capability could support both double (L1/L2) and triple (L1/L2/L5) frequency operation. This phase requires only the definition of the new message type(s) and the commitment of receiver manufacturers to implement the usage of the new data. In the second phase (intermediate future), a new signal would come on line to support RTK using two and three frequencies, as well as homeland security messaging. This signal would have the capacity to send roughly 500 bps without disrupting the legacy signal or legacy receiver performance.
This new signal could use one of the new modulation techniques that we have been investigating: phase trellis overlay and orthogonal frequency division multiplexing (OFDM). Preliminary examinations of both of these techniques have shown the potential for increased bandwidth usage (ION NTM Jan. 2004), the effects on legacy receiver performance through a modulator test-bed (ION AM June 2004), and some effects of an actual transmitter (including antenna and coupler) on the signal (ION GNSS Sept. 2004). The current paper describes recent investigations into the architecture of the receivers for these modulation methods, including details of the demodulation and decoding methods. We also establish receiver performance measures and present preliminary performance results. Reprinted with permission from The Institute of Navigation (http://ion.org/) and The Proceedings of the 18th International Technical Meeting of the Satellite Division of The Institute of Navigation (pp. 788-800). Fairfax, VA: The Institute of Navigation.
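The MSK modulation described above can be sketched in a few lines: a complex-baseband modulator in which each bit advances the carrier phase linearly by +π/2 or −π/2 over the bit interval (modulation index h = 0.5), which is what makes the phase continuous and the envelope constant. This is an illustrative sketch, not the Coast Guard's broadcast implementation; the function name and sample rate are arbitrary.

```python
import numpy as np

def msk_baseband(bits, samples_per_bit=16):
    """Generate a complex-baseband MSK waveform.

    MSK is CPFSK with modulation index h = 0.5: each bit advances the
    phase by +pi/2 (bit 1) or -pi/2 (bit 0), spread linearly over the
    bit interval, so the phase is continuous and the envelope constant.
    """
    # Per-sample phase increment: +/- (pi/2) / samples_per_bit
    steps = np.repeat(np.where(bits, 1.0, -1.0), samples_per_bit)
    steps *= (np.pi / 2) / samples_per_bit
    phase = np.cumsum(steps)
    return np.exp(1j * phase)

sig = msk_baseband(np.array([1, 0, 1, 1, 0]))
```

The constant envelope is what lets a narrowband overlay signal be added without disturbing a legacy MSK receiver's hard-limiting front end, which is the property the phase-trellis-overlay work exploits.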

    Reliable Location-Based Services from Radio Navigation Systems

    Loran is a radio-based navigation system originally designed for naval applications. We show that Loran-C's high power and highly repeatable accuracy make it well suited to security applications. First, we show how to derive a precise location tag—with a sensitivity of about 20 meters—that is difficult to project to an exact location. A device can use our location tag to block or allow certain actions without knowing its precise location. To ensure that our tag is reproducible, we make use of fuzzy extractors, a mechanism originally designed for biometric authentication. We build a fuzzy extractor specifically designed for radio-type errors and give experimental evidence of its effectiveness. Second, we show that our location tag is difficult to predict from a distance. For example, an observer cannot predict the location tag inside a guarded data center from a few hundred meters away. As an application, consider a location-aware disk drive that works only inside the data center. An attacker who steals the device and is capable of spoofing Loran-C signals still cannot make the device work, since he does not know which location tag to spoof. We provide experimental data supporting our unpredictability claim.
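The fuzzy-extractor idea can be illustrated with a simplified code-offset-style construction (not the paper's actual design): enrollment quantizes the measured signal features to a coarse grid and publishes only the offset to the grid as public helper data, so a later noisy reading re-derives the same tag as long as the noise stays below half the grid size. The grid size, hash choice, and three-feature measurement vector below are illustrative assumptions.

```python
import hashlib
import numpy as np

def gen(measurement, q=20.0):
    """Enrollment: quantize features to a grid of size q and publish the
    offset as helper data; the tag is a hash of the grid cell only."""
    cell = np.round(measurement / q)
    helper = measurement - cell * q          # public, reveals only the offset
    tag = hashlib.sha256(cell.astype(int).tobytes()).hexdigest()
    return tag, helper

def rep(noisy_measurement, helper, q=20.0):
    """Reproduction: subtract the helper and re-quantize; any noise below
    q/2 per feature maps back to the same cell, hence the same tag."""
    cell = np.round((noisy_measurement - helper) / q)
    return hashlib.sha256(cell.astype(int).tobytes()).hexdigest()

m = np.array([1203.7, -88.2, 412.5])         # hypothetical signal features
tag, helper = gen(m)
tag_noisy = rep(m + np.array([6.0, -7.5, 3.2]), helper)   # same tag
tag_far = rep(m + np.array([15.0, 0.0, 0.0]), helper)     # different cell
```

Because the helper data only encodes the sub-grid offset, publishing it does not reveal which cell (and hence which tag) the enrolled location maps to.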

    A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers have usually been implemented in hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new-generation Graphics Processing Unit (GPU) with massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities.
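Adaptive null steering of the kind described can be sketched with a minimum-variance (MVDR-style) beamformer: the weights w = R⁻¹s / (sᴴR⁻¹s) minimize the array output power while holding unit gain in the look direction, which automatically drives a null toward a strong jammer. This is a generic textbook sketch, not the paper's GPU implementation; the array geometry, jammer angle, and power levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                        # four-element uniform linear array, half-wavelength spacing

def steer(theta):
    """Array steering vector for arrival angle theta (radians off boresight)."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

# Simulated snapshots: a strong jammer at 40 deg plus receiver noise.
# (The GPS signal itself sits below the noise floor and is omitted here.)
jam = steer(np.deg2rad(40))[:, None] * (10.0 * rng.standard_normal(2000))
noise = 0.5 * (rng.standard_normal((N, 2000)) + 1j * rng.standard_normal((N, 2000)))
x = jam + noise

R = x @ x.conj().T / x.shape[1]      # sample covariance of the array snapshots
s = steer(np.deg2rad(0))             # look direction: boresight
w = np.linalg.solve(R, s)
w /= s.conj() @ w                    # MVDR: minimize power subject to w^H s = 1

gain_look = abs(w.conj() @ steer(np.deg2rad(0)))    # held at unity
gain_jam = abs(w.conj() @ steer(np.deg2rad(40)))    # deep null toward jammer
```

The per-epoch cost is dominated by forming R and solving one small linear system per channel, which is the kind of dense, data-parallel arithmetic that maps well onto a GPU.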

    Failure Detection and Exclusion via Range Consensus

    With the rise of enhanced GNSS services over the next decade (i.e. the modernized GPS, Galileo, GLONASS, and Compass constellations), the number of ranging sources (satellites) available for positioning will more than double. One can no longer assume that the probability of failure for more than one satellite within a certain timeframe is negligible. Ensuring that satellite failures are detected at the receiver is of high importance for the integrity of the satellite navigation system. With a large number of satellites, it will be possible to reduce multipath effects by excluding satellites with a pseudorange bias above a certain threshold. The scope of this work is the development of an algorithm that is capable of detecting and identifying all such satellites with a bias higher than a given threshold. The Multiple Hypothesis Solution Separation (MHSS) RAIM algorithm (Ene, 2007; Pervan et al., 1998) is one of the existing approaches to identify faulty satellites by calculating the Vertical Protection Level (VPL) for subsets of the constellation that omit one or more satellites. With the aid of the subset showing the best (or minimum) VPL, one can expect to detect satellite faults if both the ranging error and its influence on the position solution are significant enough. At the same time, there are geometries and range error distributions where a different satellite, other than the faulty one, can be excluded to minimize the VPL. Nevertheless, with multiple constellations present, one might want to exclude the failed satellite, even if this does not always result in the minimum VPL value, as long as the protection level stays below the Vertical Alert Limit (VAL). The Range Consensus (RANCO) algorithm, which is developed in this work, calculates a position solution based on four satellites and compares this estimate with the pseudoranges of all the satellites that did not contribute to this solution.
The residuals of this comparison are then used as a measure of statistical consensus. The satellites that have a higher estimated range error than a certain threshold are identified as outliers, as their range measurements disagree with the expected pseudoranges by a significant amount given the position estimate. All subsets of four satellites that have an acceptable geometric conditioning with respect to orthogonality are considered. Hence, the chances are very high that a subset of four satellites that is consistent with all the other "healthy" satellites will be found. The subset with the most inliers is consequently utilized for identification of the outliers in the combined constellation. This approach allows one to identify up to n − 5 outliers, where n is the number of satellites in view: four satellites are needed for the estimate and at least one more to confirm it. As long as at least five satellites in view are consistent with respect to the pseudoranges, one can reliably exclude the ones that have a bias higher than the threshold. This approach is similar to the Random Sample Consensus (RANSAC) algorithm applied in computer vision (Fischler et al., 1981), as well as previous Range Comparison RAIM algorithms (Lee, 1986). The minimum bias in the pseudorange that allows RANCO to separate outliers from inliers is smaller than six times the variance of the expected error. However, it can be made even smaller with a second variant of the algorithm proposed in this work, called Suggestion Range Consensus (S-RANCO). In S-RANCO, the number of times a satellite is not an inlier of a set of four different satellites is computed. This approach allows the identification of a possibly faulty satellite even when only lower ranging biases are introduced as an effect of the fault.
The batch of satellite subsets to be examined is preselected by a very fast algorithm that considers the alignment of the normal vectors between the receiver and the satellite (the first 3 columns of the geometry matrix). Concerning computational complexity, only 4 × 4 matrices are inverted in both algorithms. With the reliable detection and identification of multiple satellites producing very low ranging biases, the resulting information will also be very useful for existing RAIM Fault Detection and Exclusion (FDE) algorithms (Ene et al., 2007; Walter et al., 1995).
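The RANCO procedure described above can be sketched as a RANSAC-style consensus search over a linearized pseudorange model. This is a simplification of the paper's algorithm: the geometry matrix, noise level, conditioning test, and injected bias below are all synthetic, and only the 4-satellite solve / residual-consensus structure is taken from the text.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def ranco(G, rho, thresh):
    """RANCO-style consensus (simplified sketch).

    Solve the linearized 4-unknown position/clock fix from every
    4-satellite subset, then flag satellites whose pseudorange residual
    against that fix exceeds `thresh`. Returns the inlier index set of
    the largest-consensus subset; everything else is an outlier.
    """
    n = G.shape[0]
    best = set()
    for idx in itertools.combinations(range(n), 4):
        Gs = G[list(idx)]
        if abs(np.linalg.det(Gs)) < 1e-3:        # crude conditioning guard
            continue
        x = np.linalg.solve(Gs, rho[list(idx)])  # only 4x4 systems, as in RANCO
        inliers = {i for i in range(n) if abs(rho[i] - G[i] @ x) < thresh}
        if len(inliers) > len(best):
            best = inliers
    return best

n = 9
G = np.hstack([rng.standard_normal((n, 3)), np.ones((n, 1))])  # LOS columns + clock
x_true = np.array([5.0, -3.0, 2.0, 1.0])
rho = G @ x_true + 0.1 * rng.standard_normal(n)
rho[6] += 50.0                                   # inject a 50 m bias on satellite 6
inliers = ranco(G, rho, thresh=1.0)              # satellite 6 is excluded
```

An S-RANCO-style variant would instead count, for each satellite, how often it fails the inlier test across all subsets, which tolerates smaller biases at the cost of a softer decision.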

    An overview of GBAS integrity monitoring with a focus on ionospheric spatial anomalies

    The Local Area Augmentation System (LAAS) or, more generally, the Ground Based Augmentation System (GBAS), has been developed over the past decade to meet the accuracy, integrity, continuity and availability needs of civil aviation users. The GBAS utilizes a single reference station (with multiple GNSS receivers and antennas) within an airport and provides differential corrections via VHF data broadcast (VDB) within a 50-km region around that airport. This paper provides an overview of GBAS integrity verification, explaining how integrity risk is allocated to various potential safety threats and how monitors are used to meet these allocations. In order to illustrate GBAS integrity monitoring in detail, this paper examines the potential threat of ionospheric spatial anomalies (e.g., during ionospheric "storms") to GBAS and how GBAS protects users against this threat. In practice, the need to mitigate potential ionospheric anomalies is what dictates CAT I GBAS availability.

    Using Outage History to Exclude High-Risk Satellites from GBAS Corrections

    ABSTRACT
    GNSS augmentation systems that provide integrity guarantees to users typically assume that all GNSS satellites have the same failure probability. The assumed failure probability is conservative, such that variations among satellites in a given GNSS constellation are not expected to violate this assumption. A study of unscheduled GPS satellite outages from 1999 to the present shows that, as expected, older satellites are much more likely to fail than younger ones. In addition, satellites that have recently experienced unscheduled outages are more likely to suffer additional unscheduled outages. Combining these two factors suggests that it is possible for a subset of GPS satellites to violate the overall satellite failure probability assumption, although this has not yet been demonstrated. Potential rules for GPS satellite exclusion based upon satellite age and recent outages are investigated, and suggestions for including satellite geometry are explored.

    INTRODUCTION
    GNSS applications with demanding requirements for real-time integrity verification must make a series of assumptions regarding the performance of the satellite constellation(s) that they are using. One key assumption is the probability of unexpected satellite outages or failures. Integrity monitors that operate directly on standalone (uncorrected) GNSS measurements, such as Receiver Autonomous Integrity Monitoring (RAIM), as well as systems that provide differential corrections, such as Space Based and Ground Based Augmentation Systems (SBAS and GBAS, respectively), rely on this assumption to determine the false-alert and missed-detection probabilities that their integrity monitor algorithms must achieve. Regarding satellite failures, two different probabilities are important. One is the probability of any unexpected satellite outage, which makes the affected satellite unusable and thus affects continuity.
The other is the probability of events that pose a potential integrity risk to SBAS and GBAS users. These probabilities can be represented as rates (e.g., probability of outage per satellite per hour) or as state probabilities, meaning the long-term average probability that a given satellite is in an "outage" or "failed" state. For the purposes of verifying that integrity and continuity requirements are met, the systems mentioned above assume that all satellites have the same probability of outage or integrity failure. This assumption is made for simplicity, and the probabilities assumed are typically very conservative; thus little or no risk arises due to potential violations of this assumption. However, it is known from the history and planning of GPS satellite operations (and satellite operations in general) that older satellites are much more likely to fail than younger ones. This was captured earlier in the history of the GPS constellation by former GPS Joint Program Office director, Col. Gaylord Green (USAF, Ret.), who noted that "GPS satellites are operated to failure." [1] By this, he meant that GPS satellites were not retired when they first began experiencing problems or approaching the end of their expected useful life but instead when they failed in a manner that was not recoverable or was recoverable but no longer maintainable. This means that older satellites, and those which have recently experienced outages, will generally keep being used despite their higher propensity for further failures. This paper examines the degree to which unexpected GPS satellite outages and failures vary with satellite age and prior outage history. 
It uses the archive of GPS Notice Advisory to Navstar Users (NANU) messages to compile a history of unexpected satellite outages from January 1999 to August 2011. While these results do not immediately suggest that the satellite-fault-probability assumptions made by GBAS and other systems are violated for specific satellites, they at least raise the possibility. To address this risk, two potential satellite-exclusion heuristics are examined.
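The outage-state probability discussed above (long-term fraction of time a satellite spends in an "outage" state) can be illustrated with a toy computation over hypothetical outage records; the satellite labels, ages, durations, and observation window below are invented, not the paper's NANU data.

```python
# Hypothetical unscheduled-outage records per satellite:
# (satellite age in years at outage, outage duration in hours).
# The age field mirrors the paper's age analysis but is unused in this sketch.
records = {
    "SVN-OLD": [(12.1, 36.0), (12.6, 18.0), (13.2, 72.0)],
    "SVN-YOUNG": [(2.3, 6.0)],
}

observation_hours = 5 * 8766.0   # assume five years of observation per satellite

def outage_state_probability(outages, horizon_hours):
    """Long-run probability the satellite is in an outage state:
    total unscheduled downtime divided by total observed time."""
    down = sum(hours for _, hours in outages)
    return down / horizon_hours

p_old = outage_state_probability(records["SVN-OLD"], observation_hours)
p_young = outage_state_probability(records["SVN-YOUNG"], observation_hours)
```

An exclusion heuristic of the kind the paper investigates would then compare each satellite's estimated state probability (or recent outage count) against the constellation-wide value assumed by the augmentation system.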